In linear algebra, a coordinate vector is an explicit representation of a vector in an abstract vector space as an ordered list of numbers or, equivalently, as an element of the coordinate space $F^n$. Coordinate vectors allow calculations with abstract objects to be transformed into calculations with blocks of numbers (matrices, column vectors, and row vectors).
The idea of a coordinate vector can also be used for infinite dimensional vector spaces, as addressed below.
Let V be a vector space of dimension n over a field F and let
$$B = \{ b_1, b_2, \ldots, b_n \}$$
be an ordered basis for V. Then for every $v \in V$ there is a unique linear combination of the basis vectors that equals $v$:
$$v = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n.$$
The linear independence of the basis vectors ensures that the coefficients $\alpha_1, \ldots, \alpha_n$ are determined uniquely by $v$ and $B$. We now define the coordinate vector of $v$ relative to $B$ to be the sequence of these coordinates:
$$[v]_B = (\alpha_1, \alpha_2, \ldots, \alpha_n).$$
This is also called the representation of $v$ with respect to $B$, or the $B$ representation of $v$. The $\alpha_i$ are called the coordinates of $v$. The order of the basis is important here, since it determines the order in which the coefficients are listed in the coordinate vector.
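As a simple illustration (the space and basis here are chosen only for the example), take $V = \mathbb{R}^2$ with the ordered basis $B = \big( (1,0),\, (1,1) \big)$. The vector $v = (3,5)$ decomposes uniquely as
$$v = -2\,(1,0) + 5\,(1,1), \qquad \text{so} \qquad [v]_B = (-2,\, 5),$$
which differs from $(3,5)$, its coordinate vector relative to the standard basis.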
Coordinate vectors of finite-dimensional vector spaces can be represented as column vectors or as row vectors. The choice depends on whether the author intends to apply linear transformations by matrix multiplication on the left of the vector (pre-multiplication) or on the right (post-multiplication). A column vector of length n can be pre-multiplied by any matrix with n columns, while a row vector of length n can be post-multiplied by any matrix with n rows.
For instance, a change of basis from a basis B to a basis C may be obtained by pre-multiplying the column vector $[v]_B$ by a square matrix $M$ (see below), resulting in the column vector $[v]_C$:
$$[v]_C = M\,[v]_B.$$
If $[v]_B$ is written as a row vector instead of a column vector, the same basis transformation is obtained by post-multiplying it by the transposed matrix $M^T$, yielding the row vector $[v]_C^T$:
$$[v]_C^T = [v]_B^T\, M^T.$$
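A small numerical sketch of the two conventions (using NumPy; the matrix and coordinates are arbitrary illustrative values):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # some change-of-basis matrix
v_B = np.array([1.0, 4.0])        # coordinates of v relative to B

col = M @ v_B                     # column convention: [v]_C = M [v]_B
row = v_B @ M.T                   # row convention:    [v]_C^T = [v]_B^T M^T

assert np.allclose(col, row)      # both conventions give the same coordinates
```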
We can mechanize the above transformation by defining a function $\phi_B : V \to F^n$, called the standard representation of V with respect to B, that takes every vector to its coordinate representation: $\phi_B(v) = [v]_B$. Then $\phi_B$ is a linear transformation from V to $F^n$. In fact, it is an isomorphism, and its inverse $\phi_B^{-1} : F^n \to V$ is simply
$$\phi_B^{-1}(\alpha_1, \ldots, \alpha_n) = \alpha_1 b_1 + \cdots + \alpha_n b_n.$$
Alternatively, we could have defined $\phi_B^{-1}$ to be the above function from the beginning, realized that $\phi_B^{-1}$ is an isomorphism, and defined $\phi_B$ to be its inverse.
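A minimal computational sketch of this mechanization, assuming $V = \mathbb{R}^n$ and a basis given by the columns of an invertible matrix (the names coords_in_basis and vector_from_coords are illustrative, not standard):

```python
import numpy as np

def coords_in_basis(v, basis):
    """Standard representation: map v to its coordinate vector [v]_B.

    `basis` is an n x n array whose columns are the basis vectors,
    expressed in the standard coordinates of R^n.
    """
    return np.linalg.solve(basis, v)

def vector_from_coords(alpha, basis):
    """Inverse map: rebuild v as the linear combination sum_i alpha_i * b_i."""
    return basis @ alpha

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # b1 = (1, 0), b2 = (1, 1)
v = np.array([3.0, 5.0])

alpha = coords_in_basis(v, B)     # [-2.  5.]  since v = -2*b1 + 5*b2
assert np.allclose(vector_from_coords(alpha, B), v)
```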
Let $P_4$ be the space of all algebraic polynomials of degree less than 4 (i.e., the highest exponent of x can be 3). This space is linear and is spanned by the following polynomials:
$$B_P = \{ 1, x, x^2, x^3 \},$$
matching
$$1 := \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad x := \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad x^2 := \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad x^3 := \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix};$$
then the coordinate vector corresponding to the polynomial
$$p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$$
is
$$[p]_{B_P} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}.$$
According to that representation, the differentiation operator $d/dx$, which we shall mark $D$, will be represented by the following matrix:
$$D = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
Using that method, it is easy to explore properties of the operator, such as invertibility, whether it is Hermitian, anti-Hermitian, or neither, and its spectrum and eigenvalues.
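For instance, a quick numerical check of this representation (a sketch in NumPy; the sample polynomial is arbitrary, and the coefficient ordering $[a_0, a_1, a_2, a_3]$ follows the basis above):

```python
import numpy as np

# Matrix of d/dx on P4 with respect to the ordered basis {1, x, x^2, x^3};
# column j holds the coordinates of the derivative of the j-th basis polynomial.
D = np.array([
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
], dtype=float)

# p(x) = 2 + 5x - 4x^2 + 7x^3  ->  coordinate vector [2, 5, -4, 7]
p = np.array([2, 5, -4, 7], dtype=float)

print(D @ p)                 # [ 5. -8. 21.  0.]  i.e. p'(x) = 5 - 8x + 21x^2
print(np.linalg.det(D))      # 0.0 -> D is not invertible on P4
print(np.linalg.eigvals(D))  # all eigenvalues are 0 (D is nilpotent)
```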
Another example is the Pauli matrices, which represent the spin operator when the spin eigenstates are transformed into vector coordinates.
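For concreteness (this explicit form is added here as a reminder, not part of the passage above): in the basis of $S_z$ eigenstates, the spin components are represented by $S_k = \tfrac{\hbar}{2}\sigma_k$, where
$$\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$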
Let B and C be two different bases of a vector space V, and let us mark with $M$ the matrix whose columns consist of the C representations of the basis vectors $b_1, b_2, \ldots, b_n$:
$$M = \begin{bmatrix} [b_1]_C & [b_2]_C & \cdots & [b_n]_C \end{bmatrix}.$$
This matrix is referred to as the basis transformation matrix from B to C, and it can be used to transform any vector v from a B representation to a C representation, according to the following theorem:
$$[v]_C = M\,[v]_B.$$
If E is the standard basis, the transformation from B to E can be represented with the following simplified notation:
$$v = M\,[v]_B,$$
where
$$v = [v]_E \quad \text{and} \quad M = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}.$$
The matrix M is invertible, and $M^{-1}$ is the basis transformation matrix from C to B. In other words,
$$[v]_B = M^{-1}\,[v]_C \quad \text{for every } v \in V.$$
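A concrete sketch of computing such a matrix (assuming $V = \mathbb{R}^2$, with both bases given by their standard coordinates; the particular bases are illustrative):

```python
import numpy as np

# Two bases of R^2, given by their coordinates in the standard basis E.
# Columns of B_to_E and C_to_E are the basis vectors b_i and c_i.
B_to_E = np.array([[1.0, 1.0],
                   [0.0, 1.0]])      # b1 = (1, 0), b2 = (1, 1)
C_to_E = np.array([[2.0, 0.0],
                   [0.0, 3.0]])      # c1 = (2, 0), c2 = (0, 3)

# Basis transformation matrix from B to C: express each b_i in C coordinates.
M = np.linalg.solve(C_to_E, B_to_E)   # column i is [b_i]_C

v_B = np.array([4.0, -1.0])           # some vector given in B coordinates
v_C = M @ v_B                         # the same vector in C coordinates

# Both coordinate vectors describe the same element of V.
assert np.allclose(B_to_E @ v_B, C_to_E @ v_C)

# M is invertible; its inverse maps C coordinates back to B coordinates.
assert np.allclose(np.linalg.inv(M) @ v_C, v_B)
```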
Suppose V is an infinite dimensional vector space over a field F. If the dimension is κ, then there is some basis of κ elements for V. After an order is chosen, the basis can be considered an ordered basis. The elements of V are finite linear combinations of elements in the basis, which give rise to unique coordinate representations exactly as described before. The only change is that the indexing set for the coordinates is not finite. Since a given vector v is a finite linear combination of basis elements, the only nonzero entries of the coordinate vector for v will be the nonzero coefficients of the linear combination representing v. Thus the coordinate vector for v is zero except in finitely many entries.
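For example, in the space of all polynomials over F, with the ordered basis $(1, x, x^2, x^3, \ldots)$, the coordinate vector of the polynomial $3 + 5x^7$ is the sequence
$$(3, 0, 0, 0, 0, 0, 0, 5, 0, 0, \ldots),$$
which is zero beyond the entry indexed by $x^7$.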
Linear transformations between (possibly) infinite-dimensional vector spaces can be modeled, analogously to the finite-dimensional case, with infinite matrices. The special case of transformations from V into V is described in the full linear ring article.